In futurology, a '''singleton''' is a hypothetical world order in which there is a single decision-making agency at the highest level, capable of exerting effective control over its domain and permanently preventing both internal and external threats to its supremacy. The term seems to have been first defined by Nick Bostrom.〔Nick Bostrom (2006). "What is a Singleton?". ''Linguistic and Philosophical Investigations'' 5(2): 48-54.〕 An artificial general intelligence having undergone an intelligence explosion could form a singleton, as could a world government armed with mind control and social surveillance technologies. A singleton need not directly micromanage everything in its domain; it could allow diverse forms of organization within itself, albeit guaranteed to function within strict parameters. A singleton need not support a civilization, and in fact could obliterate it upon coming to power.

A singleton has both potential risks and potential benefits. Notably, a suitable singleton could solve world coordination problems that would not otherwise be solvable, opening up otherwise unavailable developmental trajectories for civilization. For example, Bostrom suggests that a singleton could hold Darwinian evolutionary pressures in check, preventing agents interested only in reproduction from coming to dominate.〔Nick Bostrom (2004). "The Future of Human Evolution". ''Death and Anti-Death: Two Hundred Years After Kant, Fifty Years After Turing'', ed. Charles Tandy (Ria University Press: Palo Alto, California): 339-371.〕 Yet Bostrom also regards the possibility of a stable, repressive, totalitarian global regime as a serious existential risk.〔Nick Bostrom (2002). "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards". ''Journal of Evolution and Technology'' 9(1).〕 The very stability of a singleton makes the installation of a ''bad'' singleton especially catastrophic, since the consequences can never be undone. Bryan Caplan writes that "perhaps an eternity of totalitarianism would be worse than extinction".〔Bryan Caplan (2008). "The Totalitarian Threat". ''Global Catastrophic Risks'', eds. Bostrom & Ćirković (Oxford University Press): 504-519.〕

==See also==
*Existential risk
*Friendly AI